Declarative Setup

Argo CD applications, projects and settings can be defined declaratively using Kubernetes manifests. These can be updated using kubectl apply, without needing to touch the argocd command-line tool.

Quick Reference

All resources, including Application and AppProject specs, have to be installed in the Argo CD namespace (by default argocd).

Atomic configuration

| Sample File | Resource Name | Kind | Description |
|-------------|---------------|------|-------------|
| argocd-cm.yaml | argocd-cm | ConfigMap | General Argo CD configuration |
| argocd-repositories.yaml | my-private-repo / istio-helm-repo / private-helm-repo / private-repo | Secrets | Sample repository connection details |
| argocd-repo-creds.yaml | argoproj-https-creds / argoproj-ssh-creds / github-creds / github-enterprise-creds | Secrets | Sample repository credential templates |
| argocd-cmd-params-cm.yaml | argocd-cmd-params-cm | ConfigMap | Argo CD env variables configuration |
| argocd-secret.yaml | argocd-secret | Secret | User passwords, certificates (deprecated), signing key, Dex secrets, webhook secrets |
| argocd-rbac-cm.yaml | argocd-rbac-cm | ConfigMap | RBAC configuration |
| argocd-tls-certs-cm.yaml | argocd-tls-certs-cm | ConfigMap | Custom TLS certificates for connecting Git repositories via HTTPS (v1.2 and later) |
| argocd-ssh-known-hosts-cm.yaml | argocd-ssh-known-hosts-cm | ConfigMap | SSH known hosts data for connecting Git repositories via SSH (v1.2 and later) |

For each specific kind of ConfigMap and Secret resource, there is only a single supported resource name (as listed in the above table). If you need to merge settings, you must do so before creating the resource.

A note about ConfigMap resources

Be sure to label your ConfigMap resources with app.kubernetes.io/part-of: argocd; otherwise Argo CD will not be able to use them.
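
For illustration, a minimal sketch of an argocd-cm ConfigMap carrying the required label (the url value is only a placeholder setting):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    # Required so that Argo CD picks up this ConfigMap
    app.kubernetes.io/part-of: argocd
data:
  # Placeholder setting for illustration
  url: https://argocd.example.com
```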

Multiple configuration objects

| Sample File | Kind | Description |
|-------------|------|-------------|
| application.yaml | Application | Example application spec |
| project.yaml | AppProject | Example project spec |
| - | Secret | Repository credentials |

For Application and AppProject resources, the name of the resource equals the name of the application or project within Argo CD. This also means that application and project names are unique within a given Argo CD installation - you cannot have the same application name for two different applications.

Applications

The Application CRD is the Kubernetes resource object representing a deployed application instance in an environment. It is defined by two key pieces of information:

  • source reference to the desired state in Git (repository, revision, path, environment)
  • destination reference to the target cluster and namespace. For the cluster, one of server or name can be used, but not both (specifying both results in an error). Under the hood, when server is missing, it is calculated based on name and used for any operations. A name-based destination sketch follows this list.
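
A hedged sketch of a destination specified by cluster name instead of server URL (in-cluster is the default name Argo CD gives the local cluster):

```yaml
destination:
  # Either name or server may be set, but not both
  name: in-cluster
  namespace: guestbook
```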

A minimal Application spec is as follows:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
```

See application.yaml for additional fields. As long as you have completed the first step of Getting Started, you can apply this with kubectl apply -n argocd -f application.yaml and Argo CD will start deploying the guestbook application.

Note

The namespace must match the namespace of your Argo CD instance - typically this is argocd.

Note

When creating an application from a Helm repository, the chart attribute must be specified instead of the path attribute within spec.source.

```yaml
spec:
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argo
```

Warning

Without the resources-finalizer.argocd.argoproj.io finalizer, deleting an application will not delete the resources it manages. To perform a cascading delete, you must add the finalizer. See App Deletion.

```yaml
metadata:
  finalizers:
    - resources-finalizer.argocd.argoproj.io
```

App of Apps

You can create an app that creates other apps, which in turn can create other apps. This allows you to declaratively manage a group of apps that can be deployed and configured in concert.

See cluster bootstrapping.
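
As a hedged illustration (the repository URL and path below are hypothetical), a parent app is simply an Application whose source directory contains other Application manifests, deployed into the Argo CD namespace:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apps
  namespace: argocd
spec:
  project: default
  source:
    # Hypothetical repo holding child Application manifests
    repoURL: https://github.com/example-org/argocd-apps.git
    targetRevision: HEAD
    path: apps
  destination:
    # Child Application resources must land in the Argo CD namespace
    server: https://kubernetes.default.svc
    namespace: argocd
```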

Projects

The AppProject CRD is the Kubernetes resource object representing a logical grouping of applications. It is defined by the following key pieces of information:

  • sourceRepos reference to the repositories that applications within the project can pull manifests from.
  • destinations reference to clusters and namespaces that applications within the project can deploy into.
  • roles list of entities with definitions of their access to resources within the project.

Projects which can deploy to the Argo CD namespace grant admin access

If a Project’s destinations configuration allows deploying to the namespace in which Argo CD is installed, then Applications under that project have admin-level access. RBAC access to admin-level Projects should be carefully restricted, and push access to allowed sourceRepos should be limited to only admins.

An example spec is as follows:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-project
  namespace: argocd
  # Finalizer that ensures that project is not deleted until it is not referenced by any application
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  description: Example Project
  # Allow manifests to deploy from any Git repos
  sourceRepos:
    - '*'
  # Only permit applications to deploy to the guestbook namespace in the same cluster
  destinations:
    - namespace: guestbook
      server: https://kubernetes.default.svc
  # Deny all cluster-scoped resources from being created, except for Namespace
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace
  # Allow all namespaced-scoped resources to be created, except for ResourceQuota, LimitRange, NetworkPolicy
  namespaceResourceBlacklist:
    - group: ''
      kind: ResourceQuota
    - group: ''
      kind: LimitRange
    - group: ''
      kind: NetworkPolicy
  # Deny all namespaced-scoped resources from being created, except for Deployment and StatefulSet
  namespaceResourceWhitelist:
    - group: 'apps'
      kind: Deployment
    - group: 'apps'
      kind: StatefulSet
  roles:
    # A role which provides read-only access to all applications in the project
    - name: read-only
      description: Read-only privileges to my-project
      policies:
        - p, proj:my-project:read-only, applications, get, my-project/*, allow
      groups:
        - my-oidc-group
    # A role which provides sync privileges to only the guestbook-dev application, e.g. to provide
    # sync privileges to a CI system
    - name: ci-role
      description: Sync privileges for guestbook-dev
      policies:
        - p, proj:my-project:ci-role, applications, sync, my-project/guestbook-dev, allow
      # NOTE: JWT tokens can only be generated by the API server and the token is not persisted
      # anywhere by Argo CD. It can be prematurely revoked by removing the entry from this list.
      jwtTokens:
        - iat: 1535390316
```

Repositories

Note

Some Git hosters - notably GitLab and possibly on-premise GitLab instances as well - require you to specify the .git suffix in the repository URL, otherwise they will send an HTTP 301 redirect to the repository URL suffixed with .git. Argo CD will not follow these redirects, so you have to adjust your repository URL to be suffixed with .git.

Repository details are stored in secrets. To configure a repo, create a secret which contains repository details. Consider using bitnami-labs/sealed-secrets to store an encrypted secret definition as a Kubernetes manifest. Each repository must have a url field and, depending on whether you connect using HTTPS, SSH, or GitHub App, username and password (for HTTPS), sshPrivateKey (for SSH), or githubAppPrivateKey (for GitHub App).

Warning

When using bitnami-labs/sealed-secrets, the labels will be removed and have to be re-added as described here: https://github.com/bitnami-labs/sealed-secrets#sealedsecrets-as-templates-for-secrets

Example for HTTPS:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/argoproj/private-repo
  password: my-password
  username: my-username
```

Example for SSH:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:argoproj/my-private-repository.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
```

Example for GitHub App:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/argoproj/my-private-repository
  githubAppID: 1
  githubAppInstallationID: 2
  # GitHub App private keys are issued as RSA PEM files
  githubAppPrivateKey: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
---
apiVersion: v1
kind: Secret
metadata:
  name: github-enterprise-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://ghe.example.com/argoproj/my-private-repository
  githubAppID: 1
  githubAppInstallationID: 2
  githubAppEnterpriseBaseUrl: https://ghe.example.com/api/v3
  githubAppPrivateKey: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
```

Example for Google Cloud Source repositories:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://source.developers.google.com/p/my-google-project/r/my-repo
  gcpServiceAccountKey: |
    {
      "type": "service_account",
      "project_id": "my-google-project",
      "private_key_id": "REDACTED",
      "private_key": "-----BEGIN PRIVATE KEY-----\nREDACTED\n-----END PRIVATE KEY-----\n",
      "client_email": "argocd-service-account@my-google-project.iam.gserviceaccount.com",
      "client_id": "REDACTED",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/argocd-service-account%40my-google-project.iam.gserviceaccount.com"
    }
```

Tip

The Kubernetes documentation has instructions for creating a secret containing a private key.

Repository Credentials

If you want to use the same credentials for multiple repositories, you can configure credential templates. Credential templates can carry the same credentials information as repositories.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: first-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/argoproj/private-repo
---
apiVersion: v1
kind: Secret
metadata:
  name: second-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/argoproj/other-private-repo
---
apiVersion: v1
kind: Secret
metadata:
  name: private-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  type: git
  url: https://github.com/argoproj
  password: my-password
  username: my-username
```

In the above example, every repository accessed via HTTPS whose URL is prefixed with https://github.com/argoproj would use a username stored in the key username and a password stored in the key password of the secret private-repo-creds for connecting to Git.

In order for Argo CD to use a credential template for any given repository, the following conditions must be met:

  • The repository must either not be configured at all, or if configured, must not contain any credential information (i.e. contain none of sshPrivateKey, username, password)
  • The URL configured for a credential template (e.g. https://github.com/argoproj) must match as prefix for the repository URL (e.g. https://github.com/argoproj/argocd-example-apps).

Note

Matching credential template URL prefixes is done on a best-match basis: the longest (best) match takes precedence. The order of definition is not important, as opposed to pre-v1.4 configuration.
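
To illustrate the precedence, consider the two hypothetical templates below. A repository at https://github.com/argoproj/argo-cd would use org-creds, because its URL prefix is the longer match:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  type: git
  # Shorter prefix: loses to org-creds for argoproj repositories
  url: https://github.com
  username: generic-user
  password: generic-password
---
apiVersion: v1
kind: Secret
metadata:
  name: org-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  type: git
  # Longer (best) match: wins for anything under github.com/argoproj
  url: https://github.com/argoproj
  username: org-user
  password: org-password
```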

The following keys are valid to refer to credential secrets (an SSH credential template sketch follows the list):

SSH repositories

  • sshPrivateKey refers to the SSH private key for accessing the repositories

HTTPS repositories

  • username and password refer to the username and/or password for accessing the repositories
  • tlsClientCertData and tlsClientCertKey refer to secrets where a TLS client certificate (tlsClientCertData) and the corresponding private key tlsClientCertKey are stored for accessing the repositories

GitHub App repositories

  • githubAppPrivateKey refers to the GitHub App private key for accessing the repositories
  • githubAppID refers to the GitHub Application ID for the application you created.
  • githubAppInstallationID refers to the Installation ID of the GitHub app you created and installed.
  • githubAppEnterpriseBaseUrl refers to the base api URL for GitHub Enterprise (e.g. https://ghe.example.com/api/v3)
  • tlsClientCertData and tlsClientCertKey refer to secrets where a TLS client certificate (tlsClientCertData) and the corresponding private key tlsClientCertKey are stored for accessing GitHub Enterprise if custom certificates are used.
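
Tying these keys together, a hedged sketch of an SSH credential template covering all repositories under a hypothetical git@github.com:example-org prefix:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-org-ssh-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  type: git
  # Applies to every repository whose URL starts with this prefix
  url: git@github.com:example-org
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
```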

Repositories using self-signed TLS certificates (or signed by a custom CA)

You can manage the TLS certificates used to verify the authenticity of your repository servers in a ConfigMap object named argocd-tls-certs-cm. The data section should contain a map, with the repository server’s hostname part (not the complete URL) as key, and the certificate(s) in PEM format as data. So, if you connect to a repository with the URL https://server.example.com/repos/my-repo, you should use server.example.com as key. The certificate data should be either the server’s certificate (in case of self-signed certificate) or the certificate of the CA that was used to sign the server’s certificate. You can configure multiple certificates for each server, e.g. if you have a certificate roll-over planned.

If there are no dedicated certificates configured for a repository server, the system’s default trust store is used for validating the server’s certificate. This should be sufficient for most (if not all) public Git repository services such as GitLab, GitHub and Bitbucket, as well as most privately hosted sites which use certificates from well-known CAs, including Let’s Encrypt certificates.

An example ConfigMap object:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-tls-certs-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  server.example.com: |
    -----BEGIN CERTIFICATE-----
    MIIF1zCCA7+gAwIBAgIUQdTcSHY2Sxd3Tq/v1eIEZPCNbOowDQYJKoZIhvcNAQEL
    BQAwezELMAkGA1UEBhMCREUxFTATBgNVBAgMDExvd2VyIFNheG9ueTEQMA4GA1UE
    BwwHSGFub3ZlcjEVMBMGA1UECgwMVGVzdGluZyBDb3JwMRIwEAYDVQQLDAlUZXN0
    c3VpdGUxGDAWBgNVBAMMD2Jhci5leGFtcGxlLmNvbTAeFw0xOTA3MDgxMzU2MTda
    Fw0yMDA3MDcxMzU2MTdaMHsxCzAJBgNVBAYTAkRFMRUwEwYDVQQIDAxMb3dlciBT
    YXhvbnkxEDAOBgNVBAcMB0hhbm92ZXIxFTATBgNVBAoMDFRlc3RpbmcgQ29ycDES
    MBAGA1UECwwJVGVzdHN1aXRlMRgwFgYDVQQDDA9iYXIuZXhhbXBsZS5jb20wggIi
    MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCv4mHMdVUcafmaSHVpUM0zZWp5
    NFXfboxA4inuOkE8kZlbGSe7wiG9WqLirdr39Ts+WSAFA6oANvbzlu3JrEQ2CHPc
    CNQm6diPREFwcDPFCe/eMawbwkQAPVSHPts0UoRxnpZox5pn69ghncBR+jtvx+/u
    P6HdwW0qqTvfJnfAF1hBJ4oIk2AXiip5kkIznsAh9W6WRy6nTVCeetmIepDOGe0G
    ZJIRn/OfSz7NzKylfDCat2z3EAutyeT/5oXZoWOmGg/8T7pn/pR588GoYYKRQnp+
    YilqCPFX+az09EqqK/iHXnkdZ/Z2fCuU+9M/Zhrnlwlygl3RuVBI6xhm/ZsXtL2E
    Gxa61lNy6pyx5+hSxHEFEJshXLtioRd702VdLKxEOuYSXKeJDs1x9o6cJ75S6hko
    Ml1L4zCU+xEsMcvb1iQ2n7PZdacqhkFRUVVVmJ56th8aYyX7KNX6M9CD+kMpNm6J
    kKC1li/Iy+RI138bAvaFplajMF551kt44dSvIoJIbTr1LigudzWPqk31QaZXV/4u
    kD1n4p/XMc9HYU/was/CmQBFqmIZedTLTtK7clkuFN6wbwzdo1wmUNgnySQuMacO
    gxhHxxzRWxd24uLyk9Px+9U3BfVPaRLiOPaPoC58lyVOykjSgfpgbus7JS69fCq7
    bEH4Jatp/10zkco+UQIDAQABo1MwUTAdBgNVHQ4EFgQUjXH6PHi92y4C4hQpey86
    r6+x1ewwHwYDVR0jBBgwFoAUjXH6PHi92y4C4hQpey86r6+x1ewwDwYDVR0TAQH/
    BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAgEAFE4SdKsX9UsLy+Z0xuHSxhTd0jfn
    Iih5mtzb8CDNO5oTw4z0aMeAvpsUvjJ/XjgxnkiRACXh7K9hsG2r+ageRWGevyvx
    CaRXFbherV1kTnZw4Y9/pgZTYVWs9jlqFOppz5sStkfjsDQ5lmPJGDii/StENAz2
    XmtiPOgfG9Upb0GAJBCuKnrU9bIcT4L20gd2F4Y14ccyjlf8UiUi192IX6yM9OjT
    +TuXwZgqnTOq6piVgr+FTSa24qSvaXb5z/mJDLlk23npecTouLg83TNSn3R6fYQr
    d/Y9eXuUJ8U7/qTh2Ulz071AO9KzPOmleYPTx4Xty4xAtWi1QE5NHW9/Ajlv5OtO
    OnMNWIs7ssDJBsB7VFC8hcwf79jz7kC0xmQqDfw51Xhhk04kla+v+HZcFW2AO9so
    6ZdVHHQnIbJa7yQJKZ+hK49IOoBR6JgdB5kymoplLLiuqZSYTcwSBZ72FYTm3iAr
    jzvt1hxpxVDmXvRnkhRrIRhK4QgJL0jRmirBjDY+PYYd7bdRIjN7WNZLFsgplnS8
    9w6CwG32pRlm0c8kkiQ7FXA6BYCqOsDI8f1VGQv331OpR2Ck+FTv+L7DAmg6l37W
    +LB9LGh4OAp68ImTjqf6ioGKG0RBSznwME+r4nXtT1S/qLR6ASWUS4ViWRhbRlNK
    XWyb96wrUlv+E8I=
    -----END CERTIFICATE-----
```

Note

The argocd-tls-certs-cm ConfigMap will be mounted as a volume at the mount path /app/config/tls in the pods of argocd-server and argocd-repo-server. It will create files for each data key in the mount path directory, so the above example would result in the file /app/config/tls/server.example.com, which contains the certificate data. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration.

SSH known host public keys

If you are configuring repositories to use SSH, Argo CD will need to know their SSH public keys. In order for Argo CD to connect via SSH, the public key(s) for each repository server must be pre-configured in Argo CD (unlike TLS configuration); otherwise, connections to the repository will fail.

You can manage the SSH known hosts data in the argocd-ssh-known-hosts-cm ConfigMap. This ConfigMap contains a single entry, ssh_known_hosts, with the public keys of the SSH servers as its value. The value can be filled in from any existing ssh_known_hosts file, or from the output of the ssh-keyscan utility (which is part of OpenSSH’s client package). The basic format is <server_name> <keytype> <base64-encoded_key>, one entry per line.

Here is an example of running ssh-keyscan:

```bash
$ for host in bitbucket.org github.com gitlab.com ssh.dev.azure.com vs-ssh.visualstudio.com ; do ssh-keyscan $host 2> /dev/null ; done
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M=
github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk=
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
```

Here is an example ConfigMap object using the output from ssh-keyscan above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-ssh-known-hosts-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-ssh-known-hosts-cm
data:
  ssh_known_hosts: |
    # This file was automatically generated by hack/update-ssh-known-hosts.sh. DO NOT EDIT
    [ssh.github.com]:443 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
    [ssh.github.com]:443 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
    [ssh.github.com]:443 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk=
    bitbucket.org ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPIQmuzMBuKdWeF4+a2sjSSpBK0iqitSQ+5BM9KhpexuGt20JpTVM7u5BDZngncgrqDMbWdxMWWOGtZ9UgbqgZE=
    bitbucket.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIazEu89wgQZ4bqs3d63QSMzYVa0MuJ2e2gKTKqu+UUO
    bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M=
    github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
    github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
    github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk=
    gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
    gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
    gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
    ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
    vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
```

Note

The argocd-ssh-known-hosts-cm ConfigMap will be mounted as a volume at the mount path /app/config/ssh in the pods of argocd-server and argocd-repo-server. It will create a file ssh_known_hosts in that directory, which contains the SSH known hosts data used by Argo CD for connecting to Git repositories via SSH. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration.

Configure repositories with proxy

A proxy for your repository can be specified in the proxy field of the repository secret, along with other repository settings. Argo CD uses this proxy to access the repository. If no custom proxy is configured, Argo CD falls back to the standard proxy environment variables on the repository server.

An example repository with proxy:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/argoproj/private-repo
  proxy: https://proxy-server-url:8888
  password: my-password
  username: my-username
```

Legacy behaviour

In Argo CD version 2.0 and earlier, repositories were stored as part of the argocd-cm config map. For backward-compatibility, Argo CD will still honor repositories in the config map, but this style of repository configuration is deprecated and support for it will be removed in a future version.

```yaml
apiVersion: v1
kind: ConfigMap
data:
  repositories: |
    - url: https://github.com/argoproj/my-private-repository
      passwordSecret:
        name: my-secret
        key: password
      usernameSecret:
        name: my-secret
        key: username
  repository.credentials: |
    - url: https://github.com/argoproj
      passwordSecret:
        name: my-secret
        key: password
      usernameSecret:
        name: my-secret
        key: username
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: argocd
stringData:
  password: my-password
  username: my-username
```

Clusters

Cluster credentials are stored in secrets, just like repositories and repository credentials. Each secret must have the label argocd.argoproj.io/secret-type: cluster.

The secret data must include the following fields:

  • name - cluster name
  • server - cluster API server URL
  • namespaces - optional comma-separated list of namespaces which are accessible in that cluster. Cluster-level resources are ignored if the namespace list is not empty.
  • clusterResources - optional boolean string ("true" or "false") determining whether Argo CD can manage cluster-level resources on this cluster. This setting is used only if the list of managed namespaces is not empty.
  • project - optional string to designate this as a project-scoped cluster.
  • config - JSON representation of the following data structure:
```yaml
# Basic authentication settings
username: string
password: string
# Bearer authentication settings
bearerToken: string
# IAM authentication configuration
awsAuthConfig:
  clusterName: string
  roleARN: string
  profile: string
# Configure external command to supply client credentials
# See https://godoc.org/k8s.io/client-go/tools/clientcmd/api#ExecConfig
execProviderConfig:
  command: string
  args: [
    string
  ]
  env: {
    key: value
  }
  apiVersion: string
  installHint: string
# Transport layer security configuration settings
tlsClientConfig:
  # Base64 encoded PEM-encoded bytes (typically read from a root certificates bundle).
  caData: string
  # Base64 encoded PEM-encoded bytes (typically read from a client certificate file).
  certData: string
  # Server should be accessed without verifying the TLS certificate
  insecure: boolean
  # Base64 encoded PEM-encoded bytes (typically read from a client certificate key file).
  keyData: string
  # ServerName is passed to the server for SNI and is used in the client to check server
  # certificates against. If ServerName is empty, the hostname used to contact the
  # server is used.
  serverName: string
```

Note that if you specify a command to run under execProviderConfig, that command must be available in the Argo CD image. See BYOI (Build Your Own Image).

Cluster secret example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: |
    {
      "bearerToken": "<authentication token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

EKS

EKS cluster secret example using argocd-k8s-auth and IRSA:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: "mycluster.example.com"
  server: "https://mycluster.example.com"
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "my-eks-cluster-name",
        "roleARN": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

Note that IRSA must be enabled on your EKS cluster. Create an appropriate IAM role which is allowed to assume other IAM roles (whichever roleARNs Argo CD needs to assume), and give it an assume-role policy which allows the argocd-application-controller and argocd-server pods to assume said role via OIDC.

Example trust relationship config for arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>, which is required for Argo CD to perform actions via IAM. Ensure that the cluster has an IAM OIDC provider configured for it.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": ["system:serviceaccount:argocd:argocd-application-controller", "system:serviceaccount:argocd:argocd-server"],
          "oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

The Argo CD management role also needs to be allowed to assume other roles; in this case we want it to assume arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME> so that it can manage the cluster mapped to that role. This can be extended to allow the assumption of multiple roles, either as an explicit array of role ARNs or by using * where appropriate.

```json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": [
      "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>"
    ]
  }
}
```

Example service account configs for argocd-application-controller and argocd-server.

Warning

Once the annotations have been set on the service accounts, both the application controller and server pods need to be restarted.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
  name: argocd-application-controller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
  name: argocd-server
```

In turn, the roleARN of each managed cluster needs to be added to each respective cluster’s aws-auth config map (see Enabling IAM principal access to your cluster), and each managed cluster’s role needs an assume-role policy which allows it to be assumed by the Argo CD pod role.

Example assume role policy for a cluster which is managed by Argo CD:

```json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Principal": {
      "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
    }
  }
}
```

Example kube-system/aws-auth configmap for your cluster managed by Argo CD:

```yaml
apiVersion: v1
data:
  # Other groups and accounts omitted for brevity. Ensure that no other rolearns and/or groups are inadvertently removed,
  # or you risk borking access to your cluster.
  #
  # The group name is a RoleBinding which you use to map to a [Cluster]Role. See https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-binding-examples
  mapRoles: |
    - "groups":
      - "<GROUP-NAME-IN-K8S-RBAC>"
      "rolearn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>"
      "username": "<some-username>"
```

Alternative EKS Authentication Methods

In some scenarios it may not be possible to use IRSA, such as when the Argo CD cluster is running on a different cloud provider’s platform. In this case, there are two options:

  1. Use execProviderConfig to call the AWS authentication mechanism, which enables the injection of environment variables to supply credentials.
  2. Leverage the new AWS profile option available in Argo CD release 2.10.

Both of these options will require the steps involving IAM and the aws-auth config map (defined above) to provide the principal with access to the cluster.

Using execProviderConfig with Environment Variables

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster
  server: https://mycluster.example.com
  namespaces: "my,managed,namespaces"
  clusterResources: "true"
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "args": ["aws", "--cluster-name", "my-eks-cluster"],
        "apiVersion": "client.authentication.k8s.io/v1beta1",
        "env": {
          "AWS_REGION": "xx-east-1",
          "AWS_ACCESS_KEY_ID": "{{ .aws_key_id }}",
          "AWS_SECRET_ACCESS_KEY": "{{ .aws_key_secret }}",
          "AWS_SESSION_TOKEN": "{{ .aws_token }}"
        }
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "{{ .cluster_cert }}"
      }
    }
```

This example assumes that the role is already attached to the credentials that have been supplied. If this is not the case, the role can be appended to the args section like so:

```json
...
"args": ["aws", "--cluster-name", "my-eks-cluster", "--roleARN", "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>"],
...
```

This construct can be used in conjunction with something like the External Secrets Operator to avoid storing the keys in plain text, and it additionally helps to provide a foundation for key rotation.

Using An AWS Profile For Authentication

The option to use profiles, added in release 2.10, provides a method for supplying credentials while still using the standard Argo CD EKS cluster declaration, with an additional command flag that points to an AWS credentials file:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: "mycluster.com"
  server: "https://mycluster.com"
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "my-eks-cluster-name",
        "roleARN": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>",
        "profile": "/mount/path/to/my-profile-file"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

This will instruct Argo CD to read the file at the provided path and use the credentials defined within to authenticate to AWS. The profile must be mounted in order for this to work. For example, the following values can be defined in a Helm-based Argo CD deployment:

```yaml
controller:
  extraVolumes:
    - name: my-profile-volume
      secret:
        secretName: my-aws-profile
        items:
          - key: my-profile-file
            path: my-profile-file
  extraVolumeMounts:
    # must match the volume name defined in extraVolumes
    - name: my-profile-volume
      mountPath: /mount/path/to
      readOnly: true

server:
  extraVolumes:
    - name: my-profile-volume
      secret:
        secretName: my-aws-profile
        items:
          - key: my-profile-file
            path: my-profile-file
  extraVolumeMounts:
    # must match the volume name defined in extraVolumes
    - name: my-profile-volume
      mountPath: /mount/path/to
      readOnly: true
```

Where the secret is defined as follows:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-aws-profile
type: Opaque
stringData:
  my-profile-file: |
    [default]
    region = <aws_region>
    aws_access_key_id = <aws_access_key_id>
    aws_secret_access_key = <aws_secret_access_key>
    aws_session_token = <aws_session_token>
```

⚠️ Secret mounts are updated on an interval, not in real time. If rotation is a requirement, ensure the token lifetime outlives the mount update interval and that the rotation process doesn’t immediately invalidate the existing token.

GKE

GKE cluster secret example using argocd-k8s-auth and Workload Identity:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "args": ["gcp"],
        "apiVersion": "client.authentication.k8s.io/v1beta1"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

Note that you must enable Workload Identity on your GKE cluster, create a GCP service account with an appropriate IAM role, and bind it to the Kubernetes service accounts for argocd-application-controller and argocd-server (the latter for showing Pod logs on the UI). See Use Workload Identity and Authenticating to the Kubernetes API server.
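
As a hedged sketch of the binding step (the GCP service account name and project are placeholders), the standard Workload Identity link is the iam.gke.io/gcp-service-account annotation on each Kubernetes service account:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-application-controller
  namespace: argocd
  annotations:
    # Hypothetical GCP service account holding the required IAM role
    iam.gke.io/gcp-service-account: argocd-sa@my-google-project.iam.gserviceaccount.com
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    iam.gke.io/gcp-service-account: argocd-sa@my-google-project.iam.gserviceaccount.com
```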

AKS

Azure cluster secret example using argocd-k8s-auth and kubelogin. The azure option to the argocd-k8s-auth execProviderConfig encapsulates the get-token command of kubelogin. Depending on which authentication flow is desired (devicecode, spn, ropc, msi, azurecli, workloadidentity), set the environment variable AAD_LOGIN_METHOD to that value, and set the other environment variables appropriate to that flow.

| Variable Name | Description |
|---------------|-------------|
| AAD_LOGIN_METHOD | One of devicecode, spn, ropc, msi, azurecli, or workloadidentity |
| AAD_SERVICE_PRINCIPAL_CLIENT_CERTIFICATE | AAD client cert in pfx. Used in spn login |
| AAD_SERVICE_PRINCIPAL_CLIENT_ID | AAD client application ID |
| AAD_SERVICE_PRINCIPAL_CLIENT_SECRET | AAD client application secret |
| AAD_USER_PRINCIPAL_NAME | Used in the ropc flow |
| AAD_USER_PRINCIPAL_PASSWORD | Used in the ropc flow |
| AZURE_TENANT_ID | The AAD tenant ID |
| AZURE_AUTHORITY_HOST | Used in the WorkloadIdentityLogin flow |
| AZURE_FEDERATED_TOKEN_FILE | Used in the WorkloadIdentityLogin flow |
| AZURE_CLIENT_ID | Used in the WorkloadIdentityLogin flow |

In addition to the environment variables above, argocd-k8s-auth accepts two extra environment variables to set the AAD environment, and to set the AAD server application ID. The AAD server application ID will default to 6dae42f8-4368-4678-94ff-3960e28e3630 if not specified. See here for details.

| Variable Name | Description |
|---------------|-------------|
| AAD_ENVIRONMENT_NAME | The Azure environment to use, defaults to AzurePublicCloud |
| AAD_SERVER_APPLICATION_ID | The optional AAD server application ID, defaults to 6dae42f8-4368-4678-94ff-3960e28e3630 |

This is an example of using the federated workload login flow. The federated token file needs to be mounted as a secret into Argo CD so it can be used in the flow. The location of the token file needs to be set in the environment variable AZURE_FEDERATED_TOKEN_FILE.

If your AKS cluster utilizes the Mutating Admission Webhook from the Azure Workload Identity project, follow these steps to enable the argocd-application-controller and argocd-server pods to use the federated identity:

  1. Label the Pods: Add the azure.workload.identity/use: "true" label to the argocd-application-controller and argocd-server pods.

  2. Create Federated Identity Credential: Generate an Azure federated identity credential for the argocd-application-controller and argocd-server service accounts. Refer to the Federated Identity Credential documentation for detailed instructions.

  3. Add Annotations to the Service Accounts: Add the "azure.workload.identity/client-id": "$CLIENT_ID" and "azure.workload.identity/tenant-id": "$TENANT_ID" annotations to the argocd-application-controller and argocd-server service accounts, using the details from the federated credential.

  4. Set the AZURE_CLIENT_ID: Update the AZURE_CLIENT_ID in the cluster secret to match the client ID of the newly created federated identity credential.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "env": {
          "AAD_ENVIRONMENT_NAME": "AzurePublicCloud",
          "AZURE_CLIENT_ID": "fill in client id",
          "AZURE_TENANT_ID": "fill in tenant id", # optional, injected by workload identity mutating admission webhook if enabled
          "AZURE_FEDERATED_TOKEN_FILE": "/opt/path/to/federated_file.json", # optional, injected by workload identity mutating admission webhook if enabled
          "AZURE_AUTHORITY_HOST": "https://login.microsoftonline.com/", # optional, injected by workload identity mutating admission webhook if enabled
          "AAD_LOGIN_METHOD": "workloadidentity"
        },
        "args": ["azure"],
        "apiVersion": "client.authentication.k8s.io/v1beta1"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

This is an example of using the spn (service principal name) flow.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "env": {
          "AAD_ENVIRONMENT_NAME": "AzurePublicCloud",
          "AAD_SERVICE_PRINCIPAL_CLIENT_SECRET": "fill in your service principal client secret",
          "AZURE_TENANT_ID": "fill in tenant id",
          "AAD_SERVICE_PRINCIPAL_CLIENT_ID": "fill in your service principal client id",
          "AAD_LOGIN_METHOD": "spn"
        },
        "args": ["azure"],
        "apiVersion": "client.authentication.k8s.io/v1beta1"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

Helm Chart Repositories

Non-standard Helm chart repositories have to be registered explicitly. Each repository must have url, type and name fields. For private Helm repos you may need to configure access credentials and HTTPS settings using the username, password, tlsClientCertData and tlsClientCertKey fields.

Example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: istio
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: istio.io
  url: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
  type: helm
---
apiVersion: v1
kind: Secret
metadata:
  name: argo-helm
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: argo
  url: https://argoproj.github.io/argo-helm
  type: helm
  username: my-username
  password: my-password
  tlsClientCertData: ...
  tlsClientCertKey: ...
```

Resource Exclusion/Inclusion

Resources can be excluded from discovery and sync so that Argo CD is unaware of them. For example, the apiGroup/kind events.k8s.io/*, metrics.k8s.io/*, coordination.k8s.io/Lease, and ""/Endpoints are always excluded. Use cases:

  • You have temporary issues and want to exclude problematic resources.
  • There are many resources of one kind, and they impact Argo CD’s performance.
  • You want to restrict Argo CD’s access to certain kinds of resources, e.g. secrets. See security.md#cluster-rbac.

To configure this, edit the argocd-cm config map:

```bash
kubectl edit configmap argocd-cm -n argocd
```

Add resource.exclusions, e.g.:

```yaml
apiVersion: v1
data:
  resource.exclusions: |
    - apiGroups:
        - "*"
      kinds:
        - "*"
      clusters:
        - https://192.168.0.20
kind: ConfigMap
```

The resource.exclusions node is a list of objects. Each object can have:

  • apiGroups A list of globs to match the API group.
  • kinds A list of kinds to match. Can be "*" to match all.
  • clusters A list of globs to match the cluster.

If all three match, then the resource is ignored.

In addition to exclusions, you can configure the list of included resources using the resource.inclusions setting. By default, all resource group/kinds are included. The resource.inclusions setting allows customizing the list of included group/kinds:

```yaml
apiVersion: v1
data:
  resource.inclusions: |
    - apiGroups:
        - "*"
      kinds:
        - Deployment
      clusters:
        - https://192.168.0.20
kind: ConfigMap
```

resource.inclusions and resource.exclusions can be used together: the final list of resources includes the group/kinds specified in resource.inclusions minus the group/kinds specified in the resource.exclusions setting.
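
For example, a hedged sketch combining both settings: include only apps/* kinds on all clusters, then carve DaemonSet back out via an exclusion:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Only apps/* group/kinds are considered...
  resource.inclusions: |
    - apiGroups:
        - "apps"
      kinds:
        - "*"
      clusters:
        - "*"
  # ...except DaemonSet, which is excluded again
  resource.exclusions: |
    - apiGroups:
        - "apps"
      kinds:
        - DaemonSet
      clusters:
        - "*"
```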

Notes:

  • Quote globs in your YAML to avoid parsing errors.
  • Invalid globs result in the whole rule being ignored.
  • If you add a rule that matches existing resources, these will appear in the interface as OutOfSync.

Auto respect RBAC for controller

The Argo CD controller can be restricted from discovering/syncing specific resources using just controller RBAC, without having to manually configure resource exclusions. This feature can be enabled by setting the resource.respectRBAC key in the argocd-cm ConfigMap; once it is set, the controller will automatically stop watching resources that it does not have permission to list/access. Possible values for resource.respectRBAC are:

  • strict: checks whether the list call made by the controller is forbidden/unauthorized, and if it is, cross-checks the permission by making a SelfSubjectAccessReview call for the resource.
  • normal: only checks whether the list call response is forbidden/unauthorized and skips the SelfSubjectAccessReview call, to minimize extra api-server calls.
  • unset/empty (default): disables the feature; the controller continues to monitor all resources.

Users who are comfortable with an increase in kube api-server calls can opt for the strict option, while users who are concerned about higher API call volume and are willing to compromise on accuracy can opt for the normal option.

Notes:

  • When strict mode is used, the controller must have RBAC permission to create SelfSubjectAccessReview resources.
  • The SelfSubjectAccessReview request is only made for the list verb; it is assumed that if list is allowed for a resource, then all other permissions are also available to the controller.

Example argocd cm with resource.respectRBAC set to strict:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.respectRBAC: "strict"
```

Resource Custom Labels

Custom Labels configured with resource.customLabels (comma separated string) will be displayed in the UI (for any resource that defines them).
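
For example, a sketch of the setting in argocd-cm (the label keys are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Comma-separated list of label keys to display in the UI
  resource.customLabels: tier,environment
```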

Labels on Application Events

An optional comma-separated list of metadata.labels keys can be configured with resource.includeEventLabelKeys to add to Kubernetes events generated for Argo CD Applications. When events are generated for Applications containing the specified labels, the controller adds the matching labels to the event. This establishes an easy link between the event and the application, allowing for filtering using labels. In case of conflict between labels on the Application and AppProject, the Application label values are prioritized and added to the event.

```yaml
resource.includeEventLabelKeys: team,env*
```

To exclude certain labels from events, use the resource.excludeEventLabelKeys key, which takes a comma-separated list of metadata.labels keys.

```yaml
resource.excludeEventLabelKeys: environment,bu
```

Both resource.includeEventLabelKeys and resource.excludeEventLabelKeys support wildcards.

SSO & RBAC

  • SSO configuration details: SSO
  • RBAC configuration details: RBAC

Manage Argo CD Using Argo CD

Argo CD is able to manage itself, since all settings are represented by Kubernetes manifests. The suggested way is to create a Kustomize-based application which uses the base Argo CD manifests from https://github.com/argoproj/argo-cd and applies the required changes on top.

Example of kustomization.yaml:

```yaml
# additional resources like ingress rules, cluster and repository secrets.
resources:
  - github.com/argoproj/argo-cd//manifests/cluster-install?ref=stable
  - clusters-secrets.yaml
  - repos-secrets.yaml
# changes to config maps
patches:
  - path: overlays/argo-cd-cm.yaml
```

A live example of self-managed Argo CD configuration is available at https://cd.apps.argoproj.io, with the configuration stored at argoproj/argoproj-deployments.

Note

You will need to sign in using your GitHub account to get access to https://cd.apps.argoproj.io.